Evaluating the Robustness of Trigger Set-Based Watermarks Embedded in Deep Neural Networks
Trigger set-based watermarking schemes have gained increasing attention as they
provide a means to prove ownership for deep neural network model owners. In
this paper, we argue that state-of-the-art trigger set-based watermarking
algorithms do not achieve their designed goal of proving ownership. We posit
that this impaired capability stems from two common experimental flaws that the
existing research practice has committed when evaluating the robustness of
watermarking algorithms: (1) incomplete adversarial evaluation and (2)
overlooked adaptive attacks.
We conduct a comprehensive adversarial evaluation of 10 representative
watermarking schemes against six existing attacks and demonstrate that
each of these watermarking schemes lacks robustness against at least two
attacks. We also propose novel adaptive attacks that harness the adversary's
knowledge of the underlying watermarking algorithm of a target model. We
demonstrate that the proposed attacks effectively break all of the 10
watermarking schemes, consequently allowing adversaries to obscure the
ownership of any watermarked model. We encourage follow-up studies to consider
our guidelines when evaluating the robustness of their watermarking schemes by
conducting a comprehensive adversarial evaluation that includes our adaptive
attacks, thereby demonstrating a meaningful upper bound of watermark robustness.
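As background, trigger set-based ownership verification can be sketched in a few lines. The function names, the dictionary-backed model stubs, and the 0.9 threshold below are illustrative assumptions, not the protocol of any particular scheme evaluated in the paper.

```python
def verify_watermark(model_predict, trigger_set, threshold=0.9):
    """Ownership claim succeeds if the suspect model reproduces the secret
    trigger labels at a rate above the threshold (illustrative value)."""
    matches = sum(1 for x, y in trigger_set if model_predict(x) == y)
    return matches / len(trigger_set) >= threshold

# A watermarked model memorizes the trigger labels; a successfully attacked
# model (e.g., after fine-tuning or an adaptive attack) no longer does.
triggers = [("t1", 0), ("t2", 1), ("t3", 0), ("t4", 1)]
watermarked = {"t1": 0, "t2": 1, "t3": 0, "t4": 1}
attacked = {"t1": 1, "t2": 1, "t3": 1, "t4": 0}

print(verify_watermark(watermarked.get, triggers))  # True
print(verify_watermark(attacked.get, triggers))     # False
```

An attack "breaks" the watermark in this sense when it pushes trigger-set agreement below the verification threshold while preserving accuracy on ordinary inputs.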
Effective Targeted Attacks for Adversarial Self-Supervised Learning
Recently, unsupervised adversarial training (AT) has been highlighted as a
means of achieving robustness in models without any label information. Previous
studies in unsupervised AT have mostly focused on implementing self-supervised
learning (SSL) frameworks, which maximize the instance-wise classification loss
to generate adversarial examples. However, we observe that simply maximizing
the self-supervised training loss with an untargeted adversarial attack often
results in generating ineffective adversaries that may not help improve the
robustness of the trained model, especially for non-contrastive SSL frameworks
without negative examples. To tackle this problem, we propose a novel positive
mining for targeted adversarial attack to generate effective adversaries for
adversarial SSL frameworks. Specifically, we introduce an algorithm that
selects the most confusing yet similar target example for a given instance
based on entropy and similarity, and subsequently perturbs the given instance
towards the selected target. Our method demonstrates significant enhancements
in robustness when applied to non-contrastive SSL frameworks, and smaller but
consistent robustness improvements with contrastive SSL frameworks, on
benchmark datasets. Comment: NeurIPS 202
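The select-then-perturb idea can be sketched roughly as follows. The combined similarity-plus-entropy score and the FGSM-style step are illustrative assumptions for exposition, not the paper's exact algorithm.

```python
import numpy as np

def entropy(p, eps=1e-12):
    p = p / p.sum()
    return float(-(p * np.log(p + eps)).sum())

def select_target(anchor, candidates):
    """Pick the candidate that is both similar to the anchor and 'confusing'
    (here scored by the entropy of its similarity distribution); the paper's
    scoring may differ -- this is an illustrative combination."""
    c = candidates / np.linalg.norm(candidates, axis=1, keepdims=True)
    a = anchor / np.linalg.norm(anchor)
    sims = c @ a                  # cosine similarity to the anchor
    sim_matrix = c @ c.T          # pairwise similarities among candidates
    ents = np.array([entropy(np.exp(row)) for row in sim_matrix])
    return int(np.argmax(sims + ents))

def targeted_perturb(x, target, eps=0.1):
    """FGSM-style step that moves x toward the selected target,
    rather than blindly maximizing the SSL loss."""
    return x + eps * np.sign(target - x)
```

The contrast with untargeted attacks is that the perturbation direction is anchored to a concrete, deliberately chosen positive example instead of whatever direction maximizes the instance-wise loss.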
Revisiting Binary Code Similarity Analysis using Interpretable Feature Engineering and Lessons Learned
Binary code similarity analysis (BCSA) is widely used for diverse security
applications such as plagiarism detection, software license violation
detection, and vulnerability discovery. Despite the surging research interest
in BCSA, it is significantly challenging to perform new research in this field
for several reasons. First, most existing approaches focus only on the end
results, namely, increasing the success rate of BCSA, by adopting
uninterpretable machine learning. Moreover, they utilize their own benchmark
sharing neither the source code nor the entire dataset. Finally, researchers
often use different terminologies or even use the same technique without citing
the previous literature properly, which makes it difficult to reproduce or
extend previous work. To address these problems, we take a step back from the
mainstream and contemplate fundamental research questions for BCSA. Why does a
certain technique or a feature show better results than the others?
Specifically, we conduct the first systematic study on the basic features used
in BCSA by leveraging interpretable feature engineering on a large-scale
benchmark. Our study reveals various useful insights on BCSA. For example, we
show that a simple interpretable model with a few basic features can achieve a
comparable result to that of recent deep learning-based approaches.
Furthermore, we show that the way we compile binaries or the correctness of
underlying binary analysis tools can significantly affect the performance of
BCSA. Lastly, we make all our source code and benchmark public and suggest
future directions in this field to help further research. Comment: 22 pages,
under revision to Transactions on Software Engineering (July 2021)
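The claim that a simple interpretable model over basic features can rival deep learning-based BCSA can be illustrated with a minimal sketch. The feature set and the relative-difference score below are assumptions for illustration, not the benchmark features studied in the paper.

```python
def basic_features(insts):
    """Count a few presumed basic features of a disassembled function,
    given instructions as (mnemonic, ...) tuples (illustrative feature set)."""
    return {
        "n_insts": len(insts),
        "n_calls": sum(op == "call" for op, *_ in insts),
        "n_branches": sum(op.startswith("j") for op, *_ in insts),
    }

def similarity(f1, f2):
    """Interpretable score: one minus the average relative difference over
    shared features, so every feature's contribution is directly inspectable."""
    diffs = [abs(f1[k] - f2[k]) / max(f1[k], f2[k], 1) for k in f1]
    return 1.0 - sum(diffs) / len(diffs)
```

Unlike an opaque embedding, a mismatch here can be traced to a specific feature (e.g., a differing call count), which is the kind of interpretability the study leverages.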
What Mobile Ads Know About Mobile Users
We analyze the software stack of popular mobile advertising libraries on Android and investigate how they protect the users of advertising-supported apps from malicious advertising. We find that, by and large, Android advertising libraries properly separate the privileges of the ads from the host app by confining ads to dedicated browser instances that correctly apply the same origin policy. We then demonstrate how malicious ads can infer sensitive information about users by accessing external storage, which media-rich ads need in order to cache video and images. Even though the same origin policy prevents confined ads from reading other apps' external-storage files, it does not prevent them from learning that a file with a particular name exists. We show how, depending on the app, the mere existence of a file can reveal sensitive information about the user. For example, if the user has a pharmacy price-comparison app installed on the device, the presence of external-storage files with certain names reveals which drugs the user has looked for. We conclude with our recommendations for redesigning mobile advertising software to better protect users from malicious advertising.
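The file-existence side channel can be sketched as follows. The probed paths, package names, and labels are hypothetical examples invented for illustration, not paths from the paper.

```python
import os

# Hypothetical app-specific external-storage files whose mere existence
# reveals something about the user (illustrative paths, not from the paper).
PROBE_PATHS = {
    "/sdcard/Android/data/com.example.pharmacy/cache/search_history": "uses a pharmacy app",
    "/sdcard/Android/data/com.example.dating/cache/profile.jpg": "uses a dating app",
}

def infer_from_storage(probes, exists=os.path.exists):
    """An ad confined by the same origin policy cannot read these files,
    but it can still test whether they exist and draw inferences."""
    return [label for path, label in probes.items() if exists(path)]
```

The point of the sketch is that no read permission on the file contents is needed; the boolean existence check alone leaks the information.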
DiffCSP: Finding Browser Bugs in Content Security Policy Enforcement through Differential Testing
The Content Security Policy (CSP) is one of the de facto security mechanisms that mitigate web threats. Many websites have been deploying CSPs mainly to mitigate cross-site scripting (XSS) attacks by instructing client browsers to constrain JavaScript (JS) execution. However, a browser bug in CSP enforcement enables an adversary to bypass a deployed CSP, posing a security threat. As the CSP specification evolves, CSP becomes more complicated in supporting an increasing number of directives, which brings additional complexity to implementing correct enforcement behaviors. Unfortunately, systematically finding CSP enforcement bugs has been largely understudied.
In this paper, we propose DiffCSP, the first differential testing framework to find CSP enforcement bugs regarding JS execution. DiffCSP generates CSPs and a comprehensive set of HTML instances that exhibit all known ways of executing JS snippets. DiffCSP then executes each HTML instance for each generated policy across different browsers, thereby collecting inconsistent execution results. To analyze a large volume of the execution results, we leverage a decision tree and identify common causes of the observed inconsistencies. We demonstrate the efficacy of DiffCSP by finding 29 security bugs and eight functional bugs. We also show that three bugs are due to unclear descriptions of the CSP specification.
We further identify the common root causes of CSP enforcement bugs, such as incorrect CSP inheritance and hash handling. Moreover, we confirm the risky trend of client browsers deriving completely different interpretations from the same CSPs, which raises security concerns. Our study demonstrates the effectiveness of DiffCSP for identifying CSP enforcement bugs, and our findings contributed to patching six security bugs in major browsers, including Chrome and Safari.
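The differential core of this approach can be sketched as follows. The browser names and per-case results are illustrative inputs, not DiffCSP's actual harness or findings.

```python
def find_inconsistencies(results):
    """results maps (policy, html_instance) -> {browser: whether JS executed}.
    Any (policy, html) pair on which browsers disagree is a candidate
    enforcement bug in at least one browser."""
    return [key for key, per_browser in results.items()
            if len(set(per_browser.values())) > 1]

# Hypothetical observations: both browsers agree on the first case,
# but disagree on the second, flagging it for root-cause analysis.
observed = {
    ("script-src 'none'", "inline <script>"): {"BrowserA": False, "BrowserB": False},
    ("script-src 'none'", "iframe srcdoc"):   {"BrowserA": False, "BrowserB": True},
}
print(find_inconsistencies(observed))  # [("script-src 'none'", 'iframe srcdoc')]
```

In the paper, a decision tree is then used to cluster such inconsistencies by common cause; the sketch above only covers the disagreement-detection step.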
Secure Multi-Execution in Android
Mobile operating systems, such as Google's Android, have become a fixed part of our daily lives and are entrusted with a plethora of private information. Congruously, their data protection mechanisms have been improved steadily over the last decade and, in particular, for Android, the research community has explored various enhancements and extensions to the access control model. However, the vast majority of those solutions have been concerned with controlling the access to data, but equally important is the question of how to control the flow of data once released. Ignoring control over the dissemination of data between applications or between components of the same app opens the door for attacks, such as permission re-delegation or privacy-violating third-party libraries. Controlling information flows is a long-standing problem, and one of the most recent and practice-oriented approaches to information flow control is secure multi-execution.
In this paper, we present Ariel, the design and implementation of an IFC architecture for Android based on the secure multi-execution of apps. Ariel demonstrably extends Android's system with support for executing multiple instances of apps, and it is equipped with a policy lattice derived from the protection levels of Android's permissions as well as an I/O scheduler to achieve control over data flows between application instances. We demonstrate how secure multi-execution with Ariel can help to mitigate two prominent attacks on Android, permission re-delegations and malicious advertisement libraries.
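Secure multi-execution itself can be sketched in a few lines. The two-level lattice and the channel encoding below are deliberate simplifications of Ariel's permission-derived lattice, for illustration only.

```python
RANK = {"public": 0, "secret": 1}
DEFAULT = None  # value substituted for inputs above an instance's level

def sme(program, labeled_inputs):
    """Run one program instance per security level. The instance at level L
    sees only inputs at or below L (higher inputs are defaulted), and only
    its outputs on channels of exactly level L are kept, so secret inputs
    cannot influence public channels."""
    final = {}
    for lvl, r in RANK.items():
        view = {name: (val if RANK[ilvl] <= r else DEFAULT)
                for name, (val, ilvl) in labeled_inputs.items()}
        for chan, (val, clvl) in program(view).items():
            if clvl == lvl:
                final[chan] = val
    return final

def leaky(inputs):
    # Tries to copy the secret input onto a public output channel.
    return {"net": (inputs["pin"], "public"), "log": (inputs["pin"], "secret")}

print(sme(leaky, {"pin": ("1234", "secret")}))  # {'net': None, 'log': '1234'}
```

The leak is neutralized because the public instance never saw the real PIN, while the secret-level log still works: exactly the flow control that access-control-only mechanisms cannot provide.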
Toward better server-side Web security
Server-side Web applications are constantly exposed to new threats as new technologies emerge. For instance, forced browsing attacks exploit incomplete access-control enforcement to perform security-sensitive operations (such as database writes without proper permission) by invoking unintended program entry points. SQL command injection attacks (SQLCIA) have evolved into NoSQL command injection attacks targeting the increasingly popular NoSQL databases. They may expose internal data, bypass authentication, or violate security and privacy properties. Preventing such Web attacks demands defensive programming techniques that require repetitive and error-prone manual coding and auditing. This dissertation presents three methods for improving the security of server-side Web applications against forced browsing and SQL/NoSQL command injection attacks. The first method finds incomplete access-control enforcement. It statically identifies access-control logic that mediates security-sensitive operations and finds missing access-control checks without an a priori specification of an access-control policy. Second, we design, implement, and evaluate a static analysis and program transformation tool that finds access-control errors of omission and produces candidate repairs. Our third method dynamically identifies SQL/NoSQL command injection attacks. It computes shadow values that track user-injected input and then parses the original database query in tandem with its shadow counterpart to identify whether user-injected parts serve as code. Remediating Web vulnerabilities and blocking Web attacks are essential for improving Web application security. Automated security tools help developers remediate Web vulnerabilities and block Web attacks while minimizing error-prone human factors. This dissertation describes automated tools implementing the proposed ideas and explores their applications to real-world server-side Web applications. Automated security tools are effective for identifying server-side Web application security holes and a promising direction toward better server-side Web security.
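The shadow-value idea behind the third method can be sketched as follows. The crude tokenizer and the query template are simplified assumptions for exposition, not the dissertation's implementation.

```python
import re

def structure(sql):
    """Reduce a query to its syntactic shape: string literals collapse to STR,
    everything else keeps its token identity (crude illustrative tokenizer)."""
    tokens = re.findall(r"'[^']*'|\w+|[^\w\s]", sql)
    return ["STR" if t.startswith("'") else t.upper() for t in tokens]

def is_injection(template, user_value, shadow_char="a"):
    """Build the real query and a shadow query whose user input is benign
    filler of the same length; if their shapes differ, the user input
    changed the query structure, i.e., it acted as code."""
    real = template.format(user_value)
    shadow = template.format(shadow_char * len(user_value))
    return structure(real) != structure(shadow)

tmpl = "SELECT * FROM users WHERE name = '{}'"
print(is_injection(tmpl, "bob"))           # False: input stays a literal
print(is_injection(tmpl, "x' OR '1'='1"))  # True: input escapes the literal
```

Because detection compares parse shapes rather than matching attack signatures, the same check carries over naturally from SQL to NoSQL query languages.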
The Hitchhiker's Guide to DNS Cache Poisoning
DNS cache poisoning is a serious threat to today's Internet. We develop a formal model of the semantics of DNS caches, including the bailiwick rule and trust-level logic, and use it to systematically investigate different types of cache poisoning and to generate templates for attack payloads. We explain the impact of the attacks on DNS resolvers such as BIND, MaraDNS, and Unbound and their implications for several defenses against DNS cache poisoning. Key words: DNS, cache poisoning, formal model
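The bailiwick rule that the model formalizes can be sketched as a simple name-scope check. Real resolvers such as BIND combine this with the trust-level logic the paper models; the sketch below is an illustration, not any resolver's actual implementation.

```python
def in_bailiwick(record_name, zone):
    """A record is in-bailiwick if its name equals the queried zone or is a
    subdomain of it; out-of-bailiwick records in a response should be
    discarded rather than cached, limiting cache poisoning."""
    r = record_name.lower().rstrip(".")
    z = zone.lower().rstrip(".")
    return r == z or r.endswith("." + z)

def cacheable(response_records, zone):
    return [(name, data) for name, data in response_records
            if in_bailiwick(name, zone)]

# A response for example.com tries to smuggle in a record for victim.com:
resp = [("ns1.example.com", "1.2.3.4"), ("victim.com", "6.6.6.6")]
print(cacheable(resp, "example.com"))  # [('ns1.example.com', '1.2.3.4')]
```

The attacks the paper investigates work within or around exactly this kind of check, which is why modeling it precisely matters for evaluating defenses.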